Simultaneous feature selection and classification via Minimax Probability Machine
Authors
Abstract
This paper presents a novel method for simultaneous feature selection and classification by incorporating a robust L1-norm into the objective function of the Minimax Probability Machine (MPM). A fractional programming framework is derived using a bound on the misclassification error that involves the mean and covariance of the data. Furthermore, the resulting problems are solved by the Quadratic Interpolation...
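For context, the mean-covariance bound invoked here underlies the original MPM of Lanckriet et al.; the following is a sketch of that standard (Marshall-Olkin type) result, not of this paper's L1-norm objective:

\[
\inf_{x \sim (\bar{x},\,\Sigma)} \Pr\{a^{\top}x \ge b\} \ge \alpha
\quad\Longleftrightarrow\quad
a^{\top}\bar{x} - b \ge \kappa(\alpha)\sqrt{a^{\top}\Sigma\,a},
\qquad
\kappa(\alpha) = \sqrt{\frac{\alpha}{1-\alpha}},
\]

where the infimum is over all distributions sharing the mean \(\bar{x}\) and covariance \(\Sigma\); maximizing \(\alpha\) over hyperplanes \((a, b)\) gives the worst-case misclassification guarantee.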
Similar resources

Sparse Greedy Minimax Probability Machine Classification
The Minimax Probability Machine Classification (MPMC) framework [Lanckriet et al., 2002] builds classifiers by minimizing the maximum probability of misclassification, and gives direct estimates of the probabilistic accuracy bound Ω. The only assumption that MPMC makes is that good estimates of the means and covariance matrices of the classes exist. However, as with Support Vector Machines, MPMC i...
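For reference, in the original linear MPMC of Lanckriet et al. the bound Ω has the closed form below (this is the standard inductive formulation, not the sparse greedy algorithm itself, which approximates the same quantity):

\[
\Omega = \frac{\kappa_{*}^{2}}{1+\kappa_{*}^{2}},
\qquad
\kappa_{*} = \Bigl(\,\min_{a:\,a^{\top}(\bar{x}_1-\bar{x}_2)=1} \sqrt{a^{\top}\Sigma_1 a} + \sqrt{a^{\top}\Sigma_2 a}\,\Bigr)^{-1},
\]

with \(\bar{x}_i\) and \(\Sigma_i\) the class means and covariances.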
Robust Minimax Probability Machine Regression
We formulate regression as maximizing the minimum probability (Ω) that the true regression function is within ±ε of the regression model. Our framework starts by posing regression as a binary classification problem, such that a solution to this single classification problem directly solves the original regression problem. Minimax probability machine classification (Lanckriet et al., 2002a) is u...
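As a minimal sketch of this reduction, assuming the ±ε construction described above (the function name and the eps parameter are illustrative, not taken from the paper):

```python
import numpy as np

def regression_as_two_classes(X, y, eps=1.0):
    """Shift each output by +/-eps to create two synthetic classes.

    Feeding U (class +1) and V (class -1) to an MPM classifier yields a
    boundary a^T [y, x] = b, which is solved for y to get the regression model.
    """
    U = np.column_stack([y + eps, X])  # outputs shifted up
    V = np.column_stack([y - eps, X])  # outputs shifted down
    return U, V
```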
Transductive Minimax Probability Machine
The Minimax Probability Machine (MPM) is an elegant machine learning algorithm for inductive learning. It learns a classifier that minimizes an upper bound on its own generalization error. In this paper, we extend its celebrated inductive formulation to an equally elegant transductive learning algorithm. In the transductive setting, the label assignment of a test set is already optimized during...
Minimax Probability Machine
When constructing a classifier, the probability of correct classification of future data points should be maximized. In the current paper this desideratum is translated in a very direct way into an optimization problem, which is solved using methods from convex optimization. We also show how to exploit Mercer kernels in this setting to obtain nonlinear decision boundaries. A worst-case bound on...
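As a concrete illustration of the convex problem described above, here is a minimal sketch of a linear MPM in Python, assuming the reduced formulation of Lanckriet et al. (minimize the sum of the two class Mahalanobis norms subject to a^T(x̄1 − x̄2) = 1) and using a generic SLSQP solver rather than the authors' own iterative procedure; the regularization constant and starting point are ad hoc choices:

```python
import numpy as np
from scipy.optimize import minimize

def train_linear_mpm(X1, X2, reg=1e-6):
    """Fit a linear MPM rule: predict class 1 when a @ x >= b."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False) + reg * np.eye(X1.shape[1])  # regularized
    S2 = np.cov(X2, rowvar=False) + reg * np.eye(X2.shape[1])  # covariances

    # Reduced MPM problem: min sqrt(a'S1a) + sqrt(a'S2a)  s.t.  a'(mu1-mu2) = 1
    objective = lambda a: np.sqrt(a @ S1 @ a) + np.sqrt(a @ S2 @ a)
    constraint = {"type": "eq", "fun": lambda a: a @ (mu1 - mu2) - 1.0}

    a0 = np.linalg.solve(S1 + S2, mu1 - mu2)   # Fisher-style starting point
    a0 /= a0 @ (mu1 - mu2)                     # rescale onto the constraint
    res = minimize(objective, a0, constraints=[constraint], method="SLSQP")

    a = res.x
    kappa = 1.0 / res.fun                      # optimal kappa
    b = a @ mu1 - kappa * np.sqrt(a @ S1 @ a)  # decision threshold
    omega = kappa**2 / (1.0 + kappa**2)        # worst-case accuracy bound
    return a, b, omega
```

New points x are then classified by the sign of a @ x - b, and omega lower-bounds the accuracy against any class-conditional distributions matching the estimated moments.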
Journal
Journal title: International Journal of Computational Intelligence Systems
Year: 2010
ISSN: 1875-6883
DOI: 10.2991/ijcis.2010.3.6.6